Incentive-Aware PAC Learning

Authors

Abstract

We study PAC learning in the presence of strategic manipulation, where data points may modify their features in certain predefined ways in order to receive a better outcome. We show that the vanilla ERM principle fails to achieve any nontrivial guarantee in this context. Instead, we propose an incentive-aware version of the ERM principle which has asymptotically optimal sample complexity. We then focus our attention on incentive-compatible classifiers, which provably prevent any kind of manipulation. We give a sample complexity bound that is, curiously, independent of the hypothesis class, for the ERM principle restricted to incentive-compatible classifiers. This suggests that incentive compatibility alone can act as an effective means of regularization. We further show that it is without loss of generality to consider only incentive-compatible classifiers when opportunities for manipulation satisfy a transitivity condition. As a consequence, in such cases, the hypothesis-class-independent bound applies even without incentive compatibility. Our results set the foundations of incentive-aware learning.
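To make the setting concrete, here is a minimal sketch (not taken from the paper) contrasting the vanilla ERM principle with an incentive-aware empirical risk. It assumes a hypothetical one-dimensional setup: hypotheses are threshold classifiers, and each data point may raise its feature by at most a fixed manipulation radius to obtain the positive label. The function names, the radius, and the toy data are illustrative assumptions, not the paper's construction.

import numpy as np

def best_response(x, t, radius):
    # Feature the agent reports against threshold t: move up to the threshold
    # if it is within reach and yields the positive label, else stay truthful.
    if x >= t:
        return x
    if t - x <= radius:
        return t
    return x

def vanilla_erm_risk(t, xs, ys):
    # Empirical risk computed on the true features, ignoring manipulation.
    preds = (xs >= t).astype(int)
    return np.mean(preds != ys)

def incentive_aware_risk(t, xs, ys, radius):
    # Empirical risk when every point is first replaced by its best response.
    reported = np.array([best_response(x, t, radius) for x in xs])
    preds = (reported >= t).astype(int)
    return np.mean(preds != ys)

# Toy data: true labels given by the threshold 0.5, manipulation radius 0.2.
rng = np.random.default_rng(0)
xs = rng.uniform(0.0, 1.0, size=200)
ys = (xs >= 0.5).astype(int)
radius = 0.2

grid = np.linspace(0.0, 1.0, 101)
t_vanilla = grid[np.argmin([vanilla_erm_risk(t, xs, ys) for t in grid])]
t_aware = grid[np.argmin([incentive_aware_risk(t, xs, ys, radius) for t in grid])]

# At deployment agents do manipulate, so measure the risk each choice actually incurs.
print("vanilla ERM threshold:", t_vanilla,
      "deployed risk:", incentive_aware_risk(t_vanilla, xs, ys, radius))
print("incentive-aware threshold:", t_aware,
      "deployed risk:", incentive_aware_risk(t_aware, xs, ys, radius))

On this toy data the vanilla ERM threshold sits near 0.5 and is gamed by negative points just below it, while the incentive-aware choice shifts the threshold upward so that, after best responses, only the truly positive points end up classified positive.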

Similar Articles

Incentive-Aware Learning for Large Markets∗

In a typical learning problem, the goal is to use training data to pick one model from a collection of models that optimizes an objective function. In many multiagent settings, the training data is generated through the actions of the agents, and the model is used to make a decision (e.g., how to sell an item) that affects the agents. An illustrative example of this is the problem of learning t...

PAC-Bayesian learning bounds

with respect to θ ∈ Θ. As we assume that N is potentially very large, we will draw at random some independent identically distributed sample (W1, ..., Wn) according to the uniform distribution on {w1, ..., wN}, where the size n of the statistical sample corresponds to the amount of computations we are ready to make. This sample will be used to choose the parameters. Although in this sce...

Collaborative PAC Learning

We introduce the collaborative PAC learning model, in which k players attempt to learn the same underlying concept. We ask how much more information is required to learn an accurate classifier for all players simultaneously. We refer to the ratio between the sample complexity of collaborative PAC learning and its non-collaborative (single-player) counterpart as the overhead. We design learning ...

25.1 Pac Learning

In this framework, a learning algorithm has off-line access to what we can call a “training set”. This set consists of 〈element, value〉 pairs, where each element belongs to the domain and value is the concept evaluated on that element. We say that the algorithm can learn the concept if, when we execute it on the training set, it outputs a hypothesis that is consistent with the entire training s...

A. Pac Learning

• Let X = R² with orthonormal basis (e1, e2) and consider the set of concepts defined by the area inside a right triangle ABC with two sides parallel to the axes, with the unit vector along AB equal to e1, the unit vector along AC equal to e2, and AB/AC = α for some positive real α ∈ R+. Show, using similar methods to those used in the lecture slides for the axis-aligned rectangles, that this class can be (ε, δ)-PAC-learned from training d...

Journal

Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence

Year: 2021

ISSN: 2159-5399, 2374-3468

DOI: https://doi.org/10.1609/aaai.v35i6.16726